Computation offloading strategy based on particle swarm optimization in mobile edge computing
LUO Bin, YU Bo
Journal of Computer Applications    2020, 40 (8): 2293-2298.   DOI: 10.11772/j.issn.1001-9081.2019122200
Computation offloading is one of the means to reduce delay and save energy in Mobile Edge Computing (MEC); reasonable offloading decisions can greatly reduce industrial costs. To address the long delay and high energy consumption observed after deploying MEC servers on an industrial production line, a computation offloading strategy based on Particle Swarm Optimization (PSO), called PSAO, was proposed. First, the actual problem was modeled as a delay model and an energy consumption model. Since the target applications are delay-sensitive, the problem was transformed into delay minimization under an energy consumption constraint, with a penalty function used to balance delay and energy consumption. Second, the computation offloading decision vector was obtained by PSO, and each computation task was assigned to an appropriate MEC server through centralized control. Finally, simulation experiments compared the delay of the local offloading strategy, the MEC baseline offloading strategy, an offloading strategy based on the Artificial Fish Swarm Algorithm (AFSA), and PSAO. The average total delay of PSAO was much lower than those of the other three strategies, and PSAO reduced the total cost of the original system by 20%. Experimental results show that the proposed strategy can effectively reduce delay in MEC and balance the loads of MEC servers.
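The abstract above describes offloading decisions found by PSO with a penalty term balancing delay against an energy budget. The following sketch applies that idea to a toy model; the delay/energy formulas, the budget `E_MAX`, the penalty weight, and all PSO constants are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-task models: x[i] in [0, 1] is the fraction of
# task i offloaded to the MEC server. All constants are illustrative.
def total_delay(x):
    local = 4.0 * (1 - x)            # local execution delay
    remote = 2.0 * x                 # transmission + server delay
    return np.sum(np.maximum(local, remote))

def energy(x):
    return np.sum(1.0 * (1 - x) + 2.5 * x)   # offloading costs energy

E_MAX = 18.0   # energy budget

def fitness(x):
    # delay plus a penalty term that enforces the energy constraint
    return total_delay(x) + 10.0 * max(0.0, energy(x) - E_MAX)

def pso(n_tasks=10, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.random((n_particles, n_tasks))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.apply_along_axis(fitness, 1, pos)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        f = np.apply_along_axis(fitness, 1, pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best, best_f = pso()
print(round(best_f, 2))
```

The rounded decision vector `best` would then be mapped to concrete task-to-server assignments by the centralized controller.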
Pedestrian segmentation based on Graph Cut with shape prior
HU Jianghua, WANG Wenzhong, LUO Bin, TANG Jin
Journal of Computer Applications    2014, 34 (3): 837-840.   DOI: 10.11772/j.issn.1001-9081.2014.03.0837

Most variants of the Graph Cut algorithm impose no shape constraints on the segmentation, making it difficult to obtain semantically valid results; for pedestrian segmentation, this often yields segmented objects without a human shape. An improved Graph Cut algorithm combining shape priors with a discriminatively learned appearance model was proposed in this paper to segment pedestrians in static images. In this approach, a large number of real pedestrian silhouettes were used to encode the a priori shape of pedestrians, and a hierarchical pedestrian template model was built to reduce matching time, biasing the segmentation results toward human-like shapes. A discriminative appearance model of the pedestrian was also proposed to better distinguish persons from the background. The experimental results verify the improved performance of this approach.

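Graph Cut segmentation of the kind described above reduces labeling to an s-t minimum cut: unary (appearance) costs become terminal links and smoothness costs become neighbor links. The sketch below runs this on a toy 1-D "image" with hand-set foreground likelihoods standing in for the paper's learned appearance model and shape prior; the max-flow routine is a plain Edmonds-Karp, not the paper's implementation.

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow; returns (cut value, source-side nodes)."""
    n = len(cap)
    flow = [[0.0] * n for _ in range(n)]

    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    while (parent := bfs()) is not None:
        v, push = t, float('inf')      # bottleneck along the path
        while parent[v] is not None:
            u = parent[v]
            push = min(push, cap[u][v] - flow[u][v])
            v = u
        v = t
        while parent[v] is not None:   # augment along the path
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
    # nodes still reachable in the residual graph form the source side
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and cap[u][v] - flow[u][v] > 1e-12:
                seen.add(v)
                q.append(v)
    return sum(flow[s][v] for v in range(n)), seen

# Toy 1-D "image": unary costs come from foreground likelihoods.
pixels = [0.9, 0.8, 0.7, 0.2, 0.1]
n = len(pixels)
S, T = n, n + 1
cap = [[0.0] * (n + 2) for _ in range(n + 2)]
for i, p in enumerate(pixels):
    cap[S][i] = p          # cut this t-link to label pixel i background
    cap[i][T] = 1 - p      # cut this t-link to label pixel i foreground
LAM = 0.5                  # smoothness weight between neighboring pixels
for i in range(n - 1):
    cap[i][i + 1] = cap[i + 1][i] = LAM

value, src_side = max_flow_min_cut(cap, S, T)
foreground = sorted(i for i in src_side if i < n)
print(foreground, round(value, 2))
```

The three high-likelihood pixels end up on the source (foreground) side; the shape prior in the paper would additionally reweight the t-links toward silhouette-consistent labelings.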
Non-negative tensor factorization based on feedback sparse constraints
LIU Yanan, XU Zhengzheng, LUO Bin
Journal of Computer Applications    2013, 33 (10): 2871-2873.  
In order to fully exploit the structural information of the data and to compress image data, sparse constraints on the subspace (feedback) were applied to the objective function of non-negative tensor factorization. The algorithm was then used to reduce the dimensionality of image sets, and finally image classification was performed. The experimental results on a handwritten digit image database show that the proposed algorithm can effectively improve the accuracy of image classification.
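To keep the example short, the sketch below shows the matrix special case of sparsity-penalized non-negative factorization (multiplicative updates with an L1 penalty on the coefficients), rather than the tensor factorization the abstract describes; the penalty weight and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_nmf(V, r, lam=0.1, iters=200):
    """Non-negative factorization V ~ W @ H with an L1 sparsity
    penalty (weight lam) on H; matrix case shown for brevity."""
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        # multiplicative updates keep all factors non-negative
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

V = rng.random((20, 30))          # stand-in for a flattened image set
W, H = sparse_nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 3))
```

The rows of `H` (or the factor matrices of the tensor decomposition in the paper) serve as the reduced-dimension representation fed to the classifier.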
Graph context and its application in graph similarity measurement
WEI Zheng, TANG Jin, JIANG Bo, LUO Bin
Journal of Computer Applications    2013, 33 (01): 44-48.   DOI: 10.3724/SP.J.1087.2013.00044
Feature extraction and similarity measurement for graphs are important issues in computer vision and pattern recognition. However, traditional methods cannot adequately describe graphs under some non-rigid transformations, so a new graph feature descriptor and its similarity measurement method were proposed, based on the Graph Context (GC) descriptor. Firstly, a sample point set was obtained by discrete sampling. Secondly, the graph context descriptor was built from the sample point set. Finally, an improved Earth Mover's Distance (EMD) was used to measure the similarity between graph context descriptors. Unlike graph edit distance methods, the proposed method does not require a cost function, which is difficult to define in those methods. The experimental results demonstrate that the proposed method performs better for graphs under some non-rigid transformations.
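A much-simplified version of this pipeline can be sketched as: sample points along the graph's edges, describe each sample by a normalized histogram of its distances to all other samples (a crude stand-in for the paper's graph context descriptor), and compare descriptor sets with 1-D EMD. The greedy matching and the plain distance histogram below are illustrative simplifications, not the paper's method.

```python
import numpy as np

def sample_edges(pts, edges, per_edge=8):
    """Discretely sample points along each edge of a planar graph."""
    samples = []
    for i, j in edges:
        for t in np.linspace(0, 1, per_edge, endpoint=False):
            samples.append((1 - t) * pts[i] + t * pts[j])
    return np.array(samples)

def graph_context(samples, bins=12):
    """One normalized distance histogram per sample point."""
    d = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    rmax = d.max()
    hists = np.array([np.histogram(row, bins=bins, range=(0, rmax))[0]
                      for row in d])
    return hists / hists.sum(axis=1, keepdims=True)

def emd_1d(p, q):
    """EMD between two 1-D histograms = L1 distance of their CDFs."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def graph_distance(g1, g2):
    # greedy descriptor matching, for illustration only
    return np.mean([min(emd_1d(h1, h2) for h2 in g2) for h1 in g1])

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1.]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
gc_sq = graph_context(sample_edges(square, edges))

# a rotated copy of the graph should yield nearly the same descriptors
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
gc_rot = graph_context(sample_edges(square @ R.T, edges))
print(round(graph_distance(gc_sq, gc_rot), 4))
```

Because the histograms depend only on pairwise distances, the descriptor is invariant to rigid motions by construction; the paper's contribution is robustness under non-rigid deformation.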
Feature selection algorithm based on multi-label ReliefF
HUANG Li-li, TANG Jin, SUN Deng-di, LUO Bin
Journal of Computer Applications    2012, 32 (10): 2888-2890.   DOI: 10.3724/SP.J.1087.2012.02888
Traditional feature selection algorithms are limited to single-label data. To address this problem, a multi-label ReliefF algorithm was proposed for multi-label feature selection. For multi-label data, the algorithm first assumed, based on label co-occurrence, that label contribution values are equal; combined with three new methods of calculating label contributions, the updating formula of the feature weights was improved. Finally, a discriminative feature subset was selected from the original features. Classification experiments demonstrate that, with the same number of features, the classification accuracy of the proposed algorithm is obviously higher than that of traditional approaches.
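The core ReliefF idea extended to multi-label data can be sketched as follows: treat a neighbor as a "hit" when its label set overlaps the query's and as a "miss" otherwise, and push each feature's weight up by miss differences and down by hit differences. This binary-overlap rule is a simplification; the paper refines it with explicit label-contribution values.

```python
import numpy as np

rng = np.random.default_rng(2)

def multilabel_relieff(X, Y, n_iter=100, k=3):
    """Simplified multi-label ReliefF weight update (binary overlap
    instead of the paper's label-contribution weighting)."""
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # feature ranges
    for _ in range(n_iter):
        i = rng.integers(n)
        overlap = (Y @ Y[i]) > 0        # shares at least one label
        dist = np.abs(X - X[i]).sum(axis=1)
        hit_idx = np.where(overlap)[0]
        hit_idx = hit_idx[hit_idx != i]
        miss_idx = np.where(~overlap)[0]
        if len(hit_idx) < k or len(miss_idx) < k:
            continue
        hits = hit_idx[np.argsort(dist[hit_idx])[:k]]
        misses = miss_idx[np.argsort(dist[miss_idx])[:k]]
        for h, m in zip(hits, misses):
            # reward features that separate misses, not hits
            w += (np.abs(X[i] - X[m]) - np.abs(X[i] - X[h])) / span
    return w

# Synthetic check: feature 0 determines both labels, feature 1 is noise.
X = rng.random((200, 2))
Y = np.stack([(X[:, 0] > 0.4).astype(int),
              (X[:, 0] < 0.6).astype(int)], axis=1)
w = multilabel_relieff(X, Y)
print(w)
```

Feature selection then simply keeps the top-weighted features.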
Structural description model for video
FU Mao-sheng, LUO Bin, WU Yong-long, KONG Min
Journal of Computer Applications    2012, 32 (09): 2560-2563.   DOI: 10.3724/SP.J.1087.2012.02560
How to represent video effectively is a key difficulty in multimedia research. A structural description model for video was proposed in this paper. Using the intrinsic structural characteristics of video, a video correlative graph model was constructed, with the shots of the video as vertices and the similarities between shots as edges. The spectral properties of the video correlative graph were then extracted, including the leading eigenvalues, the eigenmode perimeter, eigenmode volume, Cheeger number, inter-mode adjacency matrices and inter-mode edge distances. Video clustering and video surveillance experiments show that the structural description model for video is feasible and effective, and that the leading eigenvalues give the best performance.
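Only one of the listed spectral features, the leading eigenvalues, is easy to show compactly. The sketch below builds a toy "video correlative graph" from made-up shot feature vectors with a Gaussian similarity (the paper does not specify this kernel; it is an assumption) and extracts the leading eigenvalues as the signature.

```python
import numpy as np

rng = np.random.default_rng(3)

def video_signature(shot_features, k=3):
    """Leading eigenvalues of a shot-similarity graph: shots are
    vertices, edge weights are pairwise shot similarities."""
    d = np.linalg.norm(shot_features[:, None] - shot_features[None, :],
                       axis=-1)
    A = np.exp(-d ** 2)              # Gaussian similarity between shots
    np.fill_diagonal(A, 0.0)         # no self-loops
    return np.linalg.eigvalsh(A)[::-1][:k]   # largest eigenvalues first

shots = rng.random((8, 4))           # 8 shots, 4-dim feature per shot
sig = video_signature(shots)

# the signature is invariant to reordering the shots (graph relabeling)
perm = rng.permutation(8)
print(np.round(sig, 3), np.allclose(sig, video_signature(shots[perm])))
```

This relabeling invariance is what makes spectral signatures convenient for clustering videos whose shots are detected in different orders.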
Medical image fusion with multi-feature based on evidential theory in wavelet domain
YAO Li-sha, ZHAO Hai-feng, LUO Bin, ZHU Zhen-yuan
Journal of Computer Applications    2012, 32 (06): 1544-1547.   DOI: 10.3724/SP.J.1087.2012.01544
To address the uncertainty of weight selection in multi-source medical image fusion, the basic probability assignment function of evidence in Dempster-Shafer (DS) evidential theory was used to express the uncertainty of the decision result. Three features of the detected image, namely regional variance, regional energy and regional information entropy, were extracted and normalized, and the basic probability assignments were derived from these features. A multi-feature fusion rule based on DS evidence theory was applied to the high-frequency components in the wavelet domain, while an Energy-of-Laplace adaptive fusion rule was applied to the low-frequency component. Experiments show that the proposed algorithm is superior to other fusion algorithms: it combines the advantages of multiple features, reduces uncertainty during the fusion process, and largely retains the details of the image.
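The DS combination step at the heart of this approach can be sketched generically: two basic probability assignments (here imagined as coming from two of the regional features; the mass values are invented) are fused with Dempster's rule, normalizing away the conflicting mass.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for basic probability assignments over a frame;
    masses are dicts keyed by frozenset hypotheses, with the full
    frame playing the role of 'uncertain'."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb            # mass on empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # renormalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({'A'}), frozenset({'B'})
THETA = frozenset({'A', 'B'})      # the whole frame: "uncertain"
# two illustrative evidences, e.g. from regional variance and energy
m_var    = {A: 0.6, B: 0.1, THETA: 0.3}
m_energy = {A: 0.5, B: 0.2, THETA: 0.3}
fused = dempster_combine(m_var, m_energy)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```

In the fusion rule of the paper, the hypothesis with the largest fused mass decides which source's high-frequency coefficient is kept.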
New codebook model based on HSV color space
FANG Xian-yong, HE Biao, LUO Bin
Journal of Computer Applications    2011, 31 (09): 2497-2501.   DOI: 10.3724/SP.J.1087.2011.02497
A new codebook model based on HSV color space was proposed to eliminate the effect of complex dynamic backgrounds in moving object detection. The merits of the new model lie in three aspects: 1) the HSV color space was introduced to effectively distinguish foreground from background and remove false targets; 2) a 4-tuple codeword was proposed for fast codebook training and small storage, compared with the traditional 9-tuple codeword; 3) a new codebook learning and updating scheme was designed for easy and fast training and detection. A global quantitative evaluation method, the recall-precision curve, was also proposed for video sequences. Qualitative and quantitative experiments demonstrate that the proposed codebook model can effectively detect moving objects against complex dynamic backgrounds.
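A per-pixel HSV codebook with a 4-tuple codeword can be sketched as below. The codeword layout (hue, saturation, min/max value) and all matching thresholds are guesses for illustration; the paper defines its own 4-tuple and update scheme.

```python
import colorsys

# Illustrative matching thresholds (not the paper's values).
H_TH, S_TH, V_SLACK = 0.1, 0.15, 0.2

def match(word, hsv):
    """Does an HSV sample fall inside a (h, s, v_min, v_max) codeword?"""
    h, s, v = hsv
    wh, ws, vmin, vmax = word
    dh = min(abs(h - wh), 1 - abs(h - wh))       # hue is circular
    return (dh <= H_TH and abs(s - ws) <= S_TH
            and vmin * (1 - V_SLACK) <= v <= vmax * (1 + V_SLACK))

def train(codebook, hsv):
    """Absorb a background sample into an existing codeword, widening
    its value range, or create a new codeword."""
    for i, w in enumerate(codebook):
        if match(w, hsv):
            wh, ws, vmin, vmax = w
            codebook[i] = (wh, ws, min(vmin, hsv[2]), max(vmax, hsv[2]))
            return
    codebook.append((hsv[0], hsv[1], hsv[2], hsv[2]))

def is_foreground(codebook, rgb):
    hsv = colorsys.rgb_to_hsv(*rgb)
    return not any(match(w, hsv) for w in codebook)

cb = []
# background training samples for one pixel: slightly varying green
for rgb in [(0.1, 0.8, 0.1), (0.12, 0.78, 0.11), (0.09, 0.82, 0.1)]:
    train(cb, colorsys.rgb_to_hsv(*rgb))

print(is_foreground(cb, (0.11, 0.8, 0.1)))   # similar green: background
print(is_foreground(cb, (0.9, 0.1, 0.1)))    # red: foreground
```

Because the three training samples merge into a single codeword, the 4-tuple variant stores less per pixel than the classical 9-tuple codebook, which is the storage saving the abstract highlights.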